Improved Generalization via Tolerant Training

Author

  • W. Nick Street
Abstract

Theoretical and computational justification is given for improved generalization when the training set is learned with less accuracy. The model used for this investigation is a simple linear one. It is shown that learning a training set with a tolerance improves generalization, over zero-tolerance training, for any testing set satisfying a certain closeness condition to the training set. These results, obtained via a mathematical programming formulation, are placed in the context of some well-known machine learning results. Computational confirmation of improved generalization is given for linear systems (including nine of the twelve real-world data sets tested), as well as for nonlinear systems such as neural networks for which no theoretical results are available at present. In particular, the tolerant training method improves generalization on noisy, sparse, and over-parameterized problems.
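The paper formulates tolerant training as a mathematical program. As a minimal illustrative sketch (not the paper's formulation), the idea of "learning with a tolerance" can be captured by an epsilon-insensitive loss: residuals within a tolerance band incur no penalty, so the fit is not forced to chase noise. All names and parameter values below (`eps`, `lr`, `steps`) are illustrative assumptions.

```python
# Sketch of tolerant training for a linear model y = w*x + b:
# subgradient descent on an epsilon-insensitive loss, where residuals
# inside [-eps, +eps] contribute zero gradient (the "tolerance").
# eps, lr, and steps are illustrative choices, not from the paper.

def fit_tolerant(xs, ys, eps=0.5, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            r = (w * x + b) - y
            if r > eps:        # over-prediction beyond the tolerance
                gw += x
                gb += 1.0
            elif r < -eps:     # under-prediction beyond the tolerance
                gw -= x
                gb -= 1.0
            # residual within the tolerance band: no penalty, no update
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Noisy samples of y = 2x: the tolerant fit stops adjusting once every
# point lies within the eps-tube, instead of fitting the noise exactly.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 2.2, 3.9, 6.1, 7.8, 10.2]
w, b = fit_tolerant(xs, ys)
```

Zero-tolerance training would correspond to `eps=0`, which drives the fit to interpolate the noisy targets; with a nonzero tolerance the gradient vanishes as soon as all residuals fall inside the band, which is the intuition behind the generalization result above.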


Similar articles

On a multiple nodes fault tolerant training for RBF: Objective function, sensitivity analysis and relation to generalization

Over the past decades, various techniques have been proposed to improve the training of a neural network against node faults, yet there is still a lack of (i) a simple objective function that formalizes multiple-node faults, and little work has been done on understanding the relation between fault tolerance and generalization. In this paper, an objective function based on the idea of Kullbac...


Fault tolerant learning for neural networks : Survey, framework and future work

While conventional learning theory focuses on training a neural network to attain good generalization, fault tolerant learning aims at training a neural network to attain acceptable generalization even if network faults appear in the future. This paper presents an extensive survey of the previous work done on fault tolerant learning. Those analytical works that have been reported in the lite...


Fault-tolerant training for optimal interpolative nets

The optimal interpolative (OI) classification network is extended to include fault tolerance and make the network more robust to the loss of a neuron. The OI net has the characteristic that the training data are fit with no more neurons than necessary. Fault tolerance further reduces the number of neurons generated during the learning procedure while maintaining the generalization capabilities ...


A Shift Tolerant Dictionary Training Method

Traditional dictionary learning methods work by vectorizing long signals and training on the frames of the data, thereby restricting the learning to time-localized atoms. We study a shift-tolerant approach to learning dictionaries, whereby the features are learned by training on shifted versions of the signal of interest. We propose an optimized Subspace Clustering learning method to accommodat...


Instance Pruning Techniques

The nearest neighbor algorithm and its derivatives are often quite successful at learning a concept from a training set and providing good generalization on subsequent input vectors. However, these techniques often retain the entire training set in memory, resulting in large memory requirements and slow execution speed, as well as a sensitivity to noise. This paper provides a discussion of issu...




Journal:

Volume   Issue 

Pages  -

Publication date: 1998